Communications Psychology
Springer Science and Business Media LLC
All preprints, ranked by how well they match Communications Psychology's content profile, based on 20 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Martinon, L.; Smallwood, J.; Riby, L. M.
Understanding transient states, like off-task mind-wandering, is assumed to be improved by capitalizing on our ability to recognize changes in our stream of thought, a process known as meta-awareness. We test this assumption by comparing the content of mind-wandering episodes noticed by the participant (self-caught) against thoughts reported after externally initiated probes (probe-caught). Thirty-eight older and 36 younger individuals completed a cognitive task. At the same time, multiple feature descriptions of thoughts (task-relevance, temporal focus, and self-reference) were captured using self- and probe-caught methods. Using a pattern-learning approach, we established that self-caught experiences produce similar but generally "noisier" estimates compared to those reported at probes. However, self-caught experiences contained more off-task characteristics relative to reports at probes. Importantly, despite reductions in off-task thought, older adults retain the ability to self-catch experiences with these features. Our study establishes self-catching ability as an essential means of revealing the detailed content of off-task states, an ability relatively well maintained into old age.
Wiederhold, B.
As anyone who has tried to memorize a one-hundred-digit number can attest, acquisition of numerical information typically proceeds at less than one bit/s. If human memory operated at this speed in general, even a simple conversation would not be possible. Indeed, through techniques such as the memory palace, which translate numerical information into more natural contexts, memory athletes manage considerably higher rates. This suggests that memory, in its intended environment and under training, performs at substantial speed, which can be quantified in memory competitions. Analyzing the data reveals three phenomena. First, in short-duration tasks up to 42 bit/s have been achieved. Remarkably, competitors spend most of the time on reading, indicating that they form mental associations even more rapidly. Second, record performances show a remarkable concordance across time scales: the processing speed depends on memorization time as a power law. Third, despite dramatic improvements in scores and mnemonic strategies over the last decades, the differences in information rates across memorization tasks remain remarkably consistent.
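The power-law relationship described above (information rate falling off with memorization time) can be checked on any set of record data with a straight-line fit in log-log coordinates. The data points below are invented for illustration and are not taken from the preprint; only the fitting procedure is the point.

```python
import math

# Hypothetical record-style data points, NOT taken from the preprint:
# (memorization time in seconds, total bits memorized).
records = [(5, 180), (60, 1200), (300, 3600), (3600, 18000)]

# Information rate in bit/s at each time scale.
times = [t for t, _ in records]
rates = [bits / t for t, bits in records]

# Least-squares fit of log(rate) against log(time): a straight line in
# log-log coordinates is exactly a power law, rate ~ c * t**slope.
xs = [math.log(t) for t in times]
ys = [math.log(r) for r in rates]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"fitted power-law exponent: {slope:.2f}")  # negative: rate falls with time
```

A negative exponent of modest magnitude reproduces the qualitative claim: processing speed declines with memorization time as a power law rather than collapsing abruptly.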
Loewinger, G.; Stensrud, M. J.; Nayak, S. M.; Yaden, D. B.; Levis, A. W.
In clinical trials for mental health treatments, functional unmasking (unblinding) is a widespread challenge wherein participants become aware of their assigned treatment. Unmasking is especially concerning with psychedelics, due to the near unmistakable acute effects (the "trip"), resulting in uncertainty about whether outcomes following treatment reflect true therapeutic properties of the interventions, or placebo-like effects. We present a counterfactual conceptualization of unmasking that 1) formalizes the shortcomings of many existing statistical and experimental design solutions (e.g., dose-response, active controls), and 2) demonstrates how modern causal inference approaches can be applied to isolate effects devoid of this "contamination." Our results reveal feedback mechanisms between perceived therapeutic benefits and expectancies that can render traditional methods prone to obscuring or exaggerating therapeutic benefits. Our proposal motivates trial designs and statistical methods that can be implemented to mitigate the impacts of functional unmasking.
Wang, Y.; LiMeixuan, M.; Yang, L.; Tan, L.; Li, T.; Xiao, H.; Deng, Y.; Zhang, Y.
Enhancing memory function is essential for daily living and cognitive health, particularly amid population aging and cognitive decline. However, current methods for memory enhancement often require specialized interventions or effortful practice. Here, we present evidence that the intrinsic memorability of images can serve as a simple, scalable, and involuntary strategy for improving associative memory. We studied its effects on associative memory in adults of different ages, memory retention in cognitively impaired older adults, and vocabulary learning in foreign language learners. Our results show that highly memorable images significantly enhance the recall of associated words, especially when the cue image and target word are semantically unrelated. Notably, these effects persist for at least one week and are robust across age groups and cognitive status. In a foreign language vocabulary task, pairing words with memorable images led to either improved recall accuracy or reduced learning time. These findings highlight that, in addition to being an intrinsic memory-enhancing property, image memorability also serves as a general facilitator of associative memory, via a method that is easy to adopt, requires minimal conscious effort, and can benefit the general public, particularly learners and patients with cognitive impairment in educational and clinical settings.
Filipowicz, A. L. S.; Levine, J.; Piasini, E.; Tavoni, G.; Kable, J. W.; Gold, J. I.
Different learning strategies are thought to fall along a continuum that ranges from simple, inflexible, and fast "model-free" strategies to more complex, flexible, and deliberative "model-based" strategies. Here we show that, contrary to this proposal, strategies at both ends of this continuum can be equally flexible, effective, and time-intensive. We analyzed behavior of adult human subjects performing a canonical learning task used to distinguish between model-free and model-based strategies. Subjects using either strategy showed similarly high information complexity, a measure of strategic flexibility, and comparable accuracy and response times. This similarity was apparent despite the generally higher computational complexity of model-based algorithms and fundamental differences in how each strategy learned: model-free learning was driven primarily by observed past responses, whereas model-based learning was driven primarily by inferences about latent task features. Thus, model-free and model-based learning differ in the information they use to learn but can support comparably flexible behavior. Statement of Relevance: The distinction between model-free and model-based learning is an influential framework that has been used extensively to understand individual- and task-dependent differences in learning by both healthy and clinical populations. A common interpretation of this distinction is that model-based strategies are more complex and therefore more flexible than model-free strategies. However, this interpretation conflates computational complexity, which relates to processing resources and is generally higher for model-based algorithms, with information complexity, which reflects flexibility but has rarely been measured.
Here we use a metric of information complexity to demonstrate that, contrary to this interpretation, model-free and model-based strategies can be equally flexible, effective, and time-intensive and are better distinguished by the nature of the information from which they learn. Our results counter common interpretations of model-free versus model-based learning and demonstrate the general usefulness of information complexity for assessing different forms of strategic flexibility.
Michiels, M.
Habits in humans are commonly studied through outcome devaluation paradigms, but most existing tasks fail to capture the robustness of habitual behavior seen in animal models. I introduce two novel behavioral tasks designed to overcome these limitations. In the first task ("shooting aliens task", n = 45), I simplified an existing instrumental learning task and implemented a novel intra-block reversal method in which stimulus positions changed unexpectedly within blocks while maintaining the same stimulus-action mappings. Participants also completed a classical devaluation phase with explicit reward changes. In the second task ("hands-attack task", n = 44), which relied on real-life avoidance behavior, devaluation was achieved by reversing reward contingencies and allowing participants to inhibit the dominant avoidance response in favor of a more effortful counterattack. Across both tasks, overtrained conditions led to more errors and longer response times after devaluation, confirming increased insensitivity to outcome change. Intra-block reversals in the shooting aliens task produced stronger habitual signatures than standard whole-block devaluation, revealing a greater cost of overriding automatic responses. In the hands-attack task, even without prior training, participants showed clear markers of habitual behavior, suggesting that real-world action patterns can replicate key features of laboratory habits. Interestingly, participants were more accurate in overriding overtrained responses when attacks were highly familiar, possibly due to enhanced perceptual processing, although this came at the cost of longer response times. These findings introduce two complementary tools that address key limitations in current paradigms: the intra-block reversal increases habit sensitivity without inflating working memory demands, while the hands-attack task captures naturalistic habit expression without artificial training, using a single, ecologically valid session.
Both are suited for clinical applications, particularly where time constraints or cognitive load limit the feasibility of traditional approaches.
Eissa, T. L.; Gold, J. I.; Josic, K.; Kilpatrick, Z. P.
Solutions to challenging inference problems are often subject to a fundamental trade-off between bias (being systematically wrong) that is minimized with complex inference strategies and variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to the frequently suboptimal inference strategies used by humans. We examined inference problems involving rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that were suboptimal relative to the Bayesian ideal observer. These suboptimal strategies reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but high bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples but displayed weaker, near-normative bias. Our results yield new insights into the principles that govern individual differences in behavior that depends on rare-event inference, and, more generally, about the information-processing trade-offs that are sensitive to not just the complexity, but also the optimality of the inference process.
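The inverted trade-off the authors describe can be illustrated with a toy rare-event estimation problem. Everything below (the event rate, the mis-tuned Beta(5, 5) prior, the sample sizes) is a hypothetical sketch of the general idea, not the preprint's task or model: an estimator with incorrect prior tuning shows low variance but high bias, while a simple sample-based heuristic shows the reverse.

```python
import random
from statistics import mean, pvariance

random.seed(0)
TRUE_RATE = 0.05           # rare-event probability (hypothetical value)
N_OBS, N_SIMS = 20, 10_000

# Simulate many small experiments of N_OBS Bernoulli observations each.
counts = [sum(random.random() < TRUE_RATE for _ in range(N_OBS))
          for _ in range(N_SIMS)]

# Simple heuristic: the raw sample proportion. Near-unbiased, but
# high-variance because it operates directly on the observed samples.
heuristic = [c / N_OBS for c in counts]

# "Bayesian-like" estimator with a mis-tuned Beta(5, 5) prior: pooling
# toward the prior lowers variance, but the prior wrongly assumes common
# events, so the estimate is systematically biased upward.
mistuned = [(c + 5) / (N_OBS + 10) for c in counts]

for name, est in [("heuristic", heuristic), ("mis-tuned Bayes", mistuned)]:
    print(f"{name:16s} bias={mean(est) - TRUE_RATE:+.3f} "
          f"variance={pvariance(est):.5f}")
```

Running the simulation shows the mis-tuned Bayesian estimator with the smaller variance but the larger systematic bias, mirroring the pattern the abstract reports across subjects.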
Yamin, D.; Schmidig, F. J.; Sharon, O.; Nadu, Y.; Nir, J.; Ranganath, C.; Nir, Y.
Human memory is typically studied by direct questioning, and the recollection of events is investigated through verbal reports. Thus, current research confounds memory per se with its report. Critically, the ability to investigate memory retrieval in populations with deficient verbal ability is limited. Here, using the MEGA (Memory Episode Gaze Anticipation) paradigm, we show that monitoring anticipatory gaze using eye tracking can quantify memory retrieval without verbal report. Upon repeated viewing of movie clips, eye gaze patterns anticipating salient events can quantify their memory traces seconds before these events appear on the screen. A series of five experiments with a total of 145 participants using either tailor-made animations or naturalistic movies consistently reveal that accumulated gaze proximity to the event can index memory. Machine learning-based classification can identify whether a given viewing is associated with memory for the event based on single-trial data of gaze features. Detailed comparison to verbal reports establishes that anticipatory gaze marks recollection of associative memory about the event, whereas pupil dilation captures familiarity. Finally, anticipatory gaze reveals beneficial effects of sleep on memory retrieval without verbal report, illustrating its broad applicability across cognitive research and clinical domains.
Grandy, T. H.; Lindenberger, U.; Werkle-Bergner, M.
The present study examined whether a cognitive process model that is inferred from group data holds, and is meaningful, at the level of the individual person. Investigation of this issue is tantamount to questioning whether the same set and configuration of cognitive processes is present within all individuals, a usually untested assumption in standard group-based experiments. Search from memory as assessed with the Sternberg memory scanning paradigm is among the most widely studied phenomena in cognitive psychology. According to the original memory scanning model, search is serial and exhaustive. Here we critically examined the validity of this model across individuals and practice. 32 younger adults completed 1488 trials of the Sternberg task distributed over eight sessions. In the first session, group data followed the pattern predicted by the original model, replicating earlier findings. However, data from the first session were not sufficiently reliable for identifying whether each individual complied with the serial exhaustive search model. In sessions six to eight, when participants performed near asymptotic levels of performance, between-person differences were reliable, group data deviated substantially from the original memory search model, and the model fit only 13 of the 32 participants' data. Our findings challenge the proposition that one general memory search process exists within a group of healthy younger adults, and question the testability of this proposition at the individual level in single-session experiments. Implications for cognitive psychology and cognitive neuroscience are discussed with reference to earlier work emphasizing the explicit consideration of potentially existent individual differences.
Wu, A.
We propose a computational instantiation of three cognitive stages from the Dot-Linear-Network (DLN) framework, grounded in a compression-efficiency thesis. DLN stages are characterized as graph-structured belief-dependency representations used to evaluate options: Dot as no persistent belief graph (reactive policies with negligible internal state), Linear as a null graph over option beliefs (K independent option estimates with no information sharing), and Network as shared latent structure (a bipartite factor graph in which F latent factors connect to K options), augmented by a temporal exposure state and an explicit structural learning cycle (hypothesis → test → update/expand). We distinguish two compression targets--option-factor structure (shared components in expected outcomes) and stakes-factor structure (shared drivers of consequence-bearing exposures)--whose intersection yields jointly efficient actions that simultaneously improve expected outcomes and marginal exposure impact. In a bandit-like simulation (100 seeds, K ∈ {20, 50, 100, 200}, F = 5), Network policies dominate Linear policies in cost-adjusted utility at large K, with the empirical crossover occurring much earlier than an analytic cost-only prediction (K* = F + c_meta/c_param), revealing that the advantage is primarily statistical (shrinkage-like estimation gains from factor pooling) rather than purely computational. Under stakes, all non-DLN agents--including Linear-Plus agents with identical factor structure and Network-standard agents with hierarchical Bayesian learning--collapse due to unmodeled cumulative exposure, while Network-DLN maintains positive utility. Within-stage consistency tests (two algorithmically distinct agents per stage) confirm that the collapse pattern is determined by representational topology, not algorithmic choice.
These results evaluate internal consistency of a DLN-to-computation mapping under explicit assumptions; they do not validate a developmental theory in humans.
Arnold, D. H.; Johnston, A.; Adie, J.; Yarrow, K.
Signal-detection theory (SDT) is one of the most popular frameworks for analyzing data from studies of human behavior - including investigations of confidence. SDT-based analyses of confidence deliver both standard estimates of sensitivity (d′) and a second estimate based only on high-confidence decisions - meta-d′. The extent to which meta-d′ estimates fall short of d′ estimates is regarded as a measure of metacognitive inefficiency, quantifying the contamination of confidence by additional noise. These analyses rely on a key but questionable assumption - that repeated exposures to an input will evoke a normally-shaped distribution of perceptual experiences (the normality assumption). Here we show, via analyses inspired by an experiment and modelling, that when distributions of experiences do not conform with the normality assumption, meta-d′ can be systematically underestimated relative to d′. Our data therefore highlight that SDT-based analyses of confidence do not provide a ground truth measure of human metacognitive inefficiency. Public Significance Statement: Signal-detection theory is one of the most popular frameworks for analysing data from experiments of human behaviour - including investigations of confidence. The authors show that the results of these analyses cannot be regarded as ground truth. If a key assumption of the framework is inadvertently violated, analyses can encourage conceptually flawed conclusions.
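The role of the normality assumption can be seen in a small simulation: the standard formula d′ = z(hit rate) − z(false-alarm rate) recovers the true sensitivity when evidence really is Gaussian, but returns a shifted value when the same formula is applied to skewed evidence with the same mean separation. This is a generic sketch of the assumption the authors probe, not their actual experiment or their meta-d′ analysis; the criterion and distributions are arbitrary choices.

```python
import random
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF

def dprime(hit_rate, fa_rate):
    """Standard SDT sensitivity estimate, which assumes Gaussian evidence."""
    return z(hit_rate) - z(fa_rate)

random.seed(1)
N, CRITERION = 50_000, 1.0

def yes_rate(samples):
    """Proportion of trials where evidence exceeds the decision criterion."""
    return sum(s > CRITERION for s in samples) / len(samples)

# Gaussian evidence: noise ~ N(0, 1), signal ~ N(1, 1); true d' is 1.
g_fa = yes_rate([random.gauss(0, 1) for _ in range(N)])
g_hit = yes_rate([random.gauss(1, 1) for _ in range(N)])
g_d = dprime(g_hit, g_fa)

# Skewed (exponential) evidence with the same mean separation and SD:
# the Gaussian-based formula now returns a different number, even though
# the underlying separation between signal and noise has not changed.
s_fa = yes_rate([random.expovariate(1) - 1 for _ in range(N)])
s_hit = yes_rate([random.expovariate(1) for _ in range(N)])
s_d = dprime(s_hit, s_fa)

print(f"Gaussian evidence d' = {g_d:.2f}, skewed evidence d' = {s_d:.2f}")
```

The discrepancy between the two numbers is the kind of distortion that, propagated into meta-d′, can masquerade as metacognitive inefficiency.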
Wang, S.; Karabay, A.; Akyürek, E.
The nature of working memory (WM) limitations has been a topic of long-standing debate, with several models proposed to elucidate this issue. In this study, we conducted a systematic comparison of seven visual WM models to assess their ability to account for target consolidation during the attentional blink (AB). The AB phenomenon refers to the finding that participants often fail to encode the second of two targets when there is a short time interval of ~500 ms or less between them, providing an opportunity to evaluate commensurate WM limitations. Despite the growing consensus on the applicability of some WM models, such as the standard mixture model and the variable precision model, to the AB domain, no study has systematically evaluated these models in this context. We compared the performance of seven widely adopted visual WM models on four different AB datasets, drawn from three separate laboratories. We fitted each model and computed Akaike Information Criterion (AIC) values at the individual level, across different conditions and experiments, and compared the models on this basis. Slot-family models most often minimized AIC for second-target reports at short lags, while variable-precision models improved at longer lags and with color targets, indicating predominantly discrete consolidation during the AB, with feature- and lag-dependent graded components. These patterns imply that failure-to-encode (guessing) dominates over low-precision encoding, except when feature content or lag affords partial consolidation, refining theories of episodic tokenization and WM consolidation during the AB.
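AIC-based model comparison of the kind described here reduces to one formula per fit, AIC = 2k − 2·ln L, with the lowest value preferred. The log-likelihoods and parameter counts below are invented placeholders; in the actual analysis they would come from fitting each WM model to a participant's report errors in each condition.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical per-participant fits (values are made up for illustration).
fits = {
    "standard mixture": {"loglik": -310.2, "k": 2},
    "variable precision": {"loglik": -308.9, "k": 3},
}
scores = {name: aic(f["loglik"], f["k"]) for name, f in fits.items()}
best = min(scores, key=scores.get)
print(best, scores)
```

Because the penalty term 2k grows with model complexity, a better-fitting model only wins if its likelihood gain outweighs its extra parameters, which is what makes per-individual AIC comparisons across conditions informative.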
Elman, J. A.; Buchholz, E.; Chen, R.; Sanderson-Cimino, M.; Bell, T. R.; Whitsel, N.; Bangen, K. J.; Cronin-Golomb, A.; Dale, A. M.; Eyler, L. T.; Fennema-Notestine, C.; Gillespie, N. A.; Granholm, E. L.; Gustavson, D. E.; Hagler, D. J.; Hauger, R. L.; Jacobs, D. M.; Jak, A. J.; Logue, M. W.; Lyons, M. J.; McKenzie, R. E.; Neale, M. C.; Rissman, R. E.; Reynolds, C. A.; Toomey, R.; Wingfield, A.; Xian, H.; Tu, X. M.; Franz, C. E.; Kremen, W. S.; Panizzon, M. S.
INTRODUCTION: Repeated cognitive testing can boost scores due to practice effects (PEs). It remains unclear whether PEs persist across multiple follow-ups and long durations. We examined PEs across multiple assessments from midlife to old age in a nonclinical sample. METHOD: Men (N=1,608) in the Vietnam Era Twin Study of Aging (VETSA) underwent neuropsychological assessment across 4 waves from mean age 56 to 74. We leveraged age-matched attrition-replacement (AR) participants to estimate PEs at each wave. We compared cognitive trajectories and prevalence of mild cognitive impairment (MCI) using unadjusted versus PE-adjusted scores. RESULTS: Across follow-ups, 7-12 of the 30 measures demonstrated significant PEs, especially in episodic memory and visuospatial domains. Adjusting for PEs resulted in steeper cognitive decline and up to 29% higher MCI prevalence. DISCUSSION: PEs persist across multiple assessments and decades. The AR-participant method provides accurate sample-specific PE estimates that enable significantly earlier detection of MCI.
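The attrition-replacement logic sketched in the abstract amounts to a simple group contrast: returnees have seen the test before, age-matched replacements take it for the first time, so their score difference at the same wave estimates the practice effect, which can then be subtracted out. All scores below are made up for illustration; this is not VETSA data or the study's full statistical model.

```python
from statistics import mean

# Toy illustration of the attrition-replacement (AR) estimate.
# Hypothetical wave-2 test scores:
returnees_wave2 = [52, 55, 50, 58, 54]      # tested before (practice)
replacements_wave2 = [49, 51, 48, 53, 50]   # first-time takers, same age

# Practice effect = group difference at the same wave, same age.
pe = mean(returnees_wave2) - mean(replacements_wave2)
adjusted = [score - pe for score in returnees_wave2]
print(f"estimated practice effect: {pe:.1f}")
print("PE-adjusted returnee scores:", [round(s, 1) for s in adjusted])
```

Subtracting the PE lowers returnee scores, which is why PE-adjusted trajectories decline more steeply and more people fall below MCI cutoffs.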
Walkowiak, S.; Coutrot, A.; Wiener, J. M.; Hornberger, M.; Manley, E.; Spiers, H. J.
Overconfidence and the Dunning-Kruger effect have been reported in many cognitive domains, but they have received little examination in the field of spatial navigation. Here, we examined overconfidence in navigation ability in 376,836 participants from 46 countries. We tested navigation using our virtual wayfinding task in the app-based video game Sea Hero Quest, and examined self-ratings of navigation ability and how many game levels participants had played before they dropped out. The main goal of this analysis was to investigate how the performance of overconfident participants influenced the dropout rate from our experimental task embedded in a video game. First, we measured and modelled overestimation at baseline game levels. Age was found to be the strongest predictor of overestimation across the entire sample. Higher age was associated with increased overconfidence in all 46 countries and 11 distinct cultural clusters. Female participants were more likely to overestimate their performance across most of the life course (19-59 years old); however, older men (60-70 years old) displayed the highest overconfidence amongst all age-gender groups. Overconfidence also varied widely across countries. Second, we estimated performance on follow-up game levels for those who were previously identified as overconfident, and found that those who were more likely to display the Dunning-Kruger effect (i.e., poor performance while being overconfident) were predominantly female and older participants. Finally, survival analysis methods with time-dependent covariates revealed that poor wayfinding performance, while being overconfident, was one of the strongest predictors of task dropout. This Dunning-Kruger bias on participant dropout existed universally across all countries in our data.
Ganel, T.; Mazuz, Y.; Algom, D.; Goodale, M. A.
Apparent facial age plays an important role in social interactions, serving as a meaningful marker of biological aging. Although both humans and AIs achieve reasonable accuracy in estimating age from a person's face, performance remains imprecise, leaving substantial room for errors and biases. Drawing on principles from classical psychophysics, we demonstrate that the existing literature on age estimation suffers from a critical theoretical and methodological shortcoming, which casts doubt on established findings. We show that the conventional measure used to benchmark the accuracy of human and AI performance is fundamentally confounded by response bias. Consequently, we introduce a novel measure that eliminates this confound. A revised framework based on simulated data, reanalysis of existing data, and new experimental results reveals fresh insights into how facial age is processed by humans and AIs. Our framework opens up new directions for future research and applications in the study of aging.
Völler, J.; Linde-Domingo, J.; Gonzalez-Garcia, C.
Suddenly finding the solution to a problem after a period of impasse often comes with a feeling of insight. This subjective experience is proposed to arise as a consequence of prediction errors. Accordingly, previous studies have revealed that more incorrect initial predictions result in more intense insights. Crucially, however, prominent models of Bayesian inference suggest that levels of computationally defined surprise are not a simple function of the distance between predictions and inputs, but also depend on their precision or certainty. Yet how these two factors interact to give rise to insight experiences remains unknown. In this pre-registered study, participants were exposed to ambiguous images while they tried to guess the correct label of the image (to derive prediction accuracy) and rated their confidence in that label (for prediction uncertainty). We then measured the intensity of their insight when a solution was given. As predicted, we found that the intensity of insight was a result of both the prediction accuracy and the uncertainty assigned to it. More specifically, when initial predictions were far from the true label, those made with lower confidence induced weaker insights, while the opposite pattern was observed when predictions were closer to reality. Trial-by-trial estimations of prediction errors from participants' responses closely mirrored insight ratings. Finally, we analysed data from two additional independent datasets with different modalities and setups and replicated the interaction between prediction accuracy and uncertainty on the intensity of insight. Altogether, these findings suggest that insight experiences are read out from prediction errors and highlight the key role of uncertainty in characterising this relationship.
Cao, L.
Temporal binding has been understood as an illusion in timing judgement. When an action triggers an outcome (e.g. a sound) after a brief delay, the action is reported to occur later than if the outcome does not occur, and the outcome is reported to occur earlier than a similar outcome not caused by an action. We show here that an attention mechanism underlies the seeming illusion of timing judgement. In one method, participants watch a rotating clock hand and report event times by noting the clock hand position when the event occurs. We find that visual spatial attention is critically involved in shaping event time reports made in this way. This occurs because action and outcome events result in shifts of attention around the clock rim, thereby biasing the perceived location of the clock hand. Using a probe detection task to measure attention, we show a difference in the distribution of visual spatial attention between a single-event condition (sound only or action only) and a two-event agency condition (action plus sound). Participants accordingly report the timing of the same event (the sound or the action) differently in the two conditions: spatial attentional shifts masquerading as temporal binding. Furthermore, computational modelling based on the attention measure can reproduce the temporal binding effect. Studies that use time judgement as an implicit marker of voluntary agency should first discount the artefactual changes in event timing reports that actually reflect differences in spatial attention. The study also has important implications for related results in mental chronometry obtained with the clock-like method since Wundt, as attention may well be a critical confounding factor in the interpretation of these studies.
Mu, J.; Preston, A. R.; Huth, A. G.
Humans do not remember all experiences uniformly. We remember certain moments better than others, and central gist better than detail. Current theories focus exclusively on surprise to explain why some moments are better remembered, and do not explain gist memory. We propose that humans uniformly sample incoming information in time, which explains both non-uniform memory and gist. Rather than surprise, this model predicts that the mutual information between a given moment and the rest of the experience drives memory. To test this model, participants listened to narrative stories and recalled them immediately afterward. Using large language models to quantify the information structure of narrative stories and participants' recall, we found that our parsimonious uniform sampling model explained memory better than earlier theories. These findings suggest an alternative, simpler account of human memory that does not rely on costly feedback mechanisms for prioritizing encoding of specific information.
Herce Castanon, S.; Cardoso-Leite, P.; Green, C. S.; Altarelli, I.; Schrater, P.; Bavelier, D.
What role do generative models play in generalization of learning in humans? Our novel multi-task prediction paradigm--where participants complete four sequence learning tasks, each being a different instance of a common generative family--allows the separate study of within-task learning (i.e., finding the solution to each of the tasks), and across-task learning (i.e., learning a task differently because of past experiences). The very first responses participants make in each task are not yet affected by within-task learning and thus reflect their priors. Our results show that these priors change across successive tasks, increasingly resembling the underlying generative family. We conceptualize multi-task learning as arising from a mixture-of-generative-models learning strategy, whereby participants simultaneously entertain multiple candidate models which compete against each other to explain the experienced sequences. This framework predicts specific error patterns, as well as a gating mechanism for learning, both of which are observed in the data.
Rybansky, F.; Rahmaniboldaji, S.; Gilbert, A.; Guerin, F.; Hurlbert, A. C.; Vuong, Q. C.
Humans recognize everyday actions without conscious effort despite challenges such as poor viewing conditions and visual similarity between actions. Yet the visual features contributing to action recognition remain unclear. To address this, we combined semantic modelling and feature reduction methods to identify critical features for recognizing actions from challenging egocentric perspectives. We first identified egocentric action videos from home environments that a motion-focused action classification network could correctly classify (Easy videos) or not (Hard videos). In Experiment 1, participants (N=136) labelled the action and object in the videos. Using a language model framework, we derived human ground truth labels for each video and quantified its recognition consistency based on semantic similarity. Participants recognized actions and objects in Easy videos more consistently than in Hard videos. In Experiment 2, we recursively reduced the Easy and Hard videos with high recognition consistency to extract minimal recognizable configurations (MIRCs), in which any further spatial or temporal reductions disrupted recognition. The data were collected using a large-scale online study (N=4360). We extracted information related to the hand, objects, scene background and visual features (e.g., orientation or motion signals) from the 474 MIRCs. Binary classification showed that recognition was disrupted when regions containing the manipulated object and strong orientation signals were removed, while temporal reduction by frame-scrambling disrupted recognition in 73% of MIRCs. The active hand made only a marginal contribution. Our results highlight the importance of both mid- and high-level information for egocentric action recognition and link hierarchical feature theories with naturalistic human perception.